Different video understanding tasks are typically treated in isolation, and even with distinct types of curated data (e.g., classifying sports in one dataset, tracking animals in another). However, in wearable cameras, the immersive egocentric perspective of a person engaging with the world around them presents an interconnected web of video understanding tasks -- hand-object manipulations, navigation in the space, or human-human interactions -- that unfold continuously, driven by the person's goals. We argue that this calls for a much more unified approach. We propose EgoTask Translation (EgoT2), which takes a collection of models optimized on separate tasks and learns to translate their outputs for improved performance on any or all of them at once. Unlike traditional transfer or multi-task learning, EgoT2's flipped design entails separate task-specific backbones and a task translator shared across all tasks, which captures synergies between even heterogeneous tasks and mitigates task competition. Demonstrating our model on a wide array of video tasks from Ego4D, we show its advantages over existing transfer paradigms and achieve top-ranked results on four of the Ego4D 2022 benchmark challenges.
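The flipped design described above can be sketched in a few lines: frozen task-specific backbones each produce their own outputs, and a single shared translator consumes all of them to re-predict a task. The toy below is our illustration, not the paper's architecture; the two linear "backbones", the feature dimensions, and the linear translator layer are all stand-ins for the real video models and learned translator module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical frozen task-specific backbones (weights fixed, never
# updated), each mapping a shared 8-dim clip feature to 4 task logits.
W_a = rng.standard_normal((8, 4))   # e.g. hand-object interaction head
W_b = rng.standard_normal((8, 4))   # e.g. social interaction head

def backbone_a(x):
    return np.tanh(x @ W_a)

def backbone_b(x):
    return np.tanh(x @ W_b)

# Shared task translator: consumes the outputs of ALL backbones and
# re-predicts one task, so evidence from heterogeneous tasks can flow.
# A single trainable linear layer stands in for the translator module.
W_t = 0.1 * rng.standard_normal((8, 4))

def translate(x):
    z = np.concatenate([backbone_a(x), backbone_b(x)], axis=-1)  # (n, 8)
    return z @ W_t                                               # task logits

x = rng.standard_normal((2, 8))   # shared features for two clips
out = translate(x)
print(out.shape)                  # (2, 4)
```

Only `W_t` would be trained here, which mirrors the key design choice: the per-task backbones stay specialized while the small shared component learns the cross-task synergies.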
Training computer vision models usually requires collecting and labeling vast amounts of imagery spanning a diverse range of scene configurations and properties. This process is incredibly time-consuming, and it is challenging to ensure that the captured data distribution maps well onto the target domain of the application scenario. Recently, synthetic data has emerged as a way to address both of these issues. However, existing methods either require human experts to manually tune each scene property or use automatic methods that offer little to no control; the latter necessitate rendering large amounts of random data variations, which is slow and often suboptimal for the target domain. We introduce the first fully differentiable synthetic data pipeline, which uses Neural Radiance Fields (NeRFs) in a closed loop with the target application's loss function. Our approach can generate data without human labor so as to maximize accuracy on the target task. We illustrate the effectiveness of our method on synthetic and real-world object detection tasks. We also introduce a new "YCB-in-the-Wild" dataset and benchmark, which provides a test scenario for object detection with varied poses in real-world environments.
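The closed loop above can be illustrated with a toy gradient descent: treat "rendering" as a differentiable function of a scene parameter, score the rendered data with a downstream task loss, and descend the gradient of that loss with respect to the generation parameter. Everything here is a stand-in of our own devising; a real pipeline would differentiate through a NeRF renderer and a detector's loss instead of this one-dimensional surrogate.

```python
import numpy as np

target = 0.8                       # hypothetical pose the detector struggles on

def render(theta):
    # Differentiable stand-in for the renderer: one scene parameter in,
    # one "image feature" out.
    return np.sin(theta)

def task_loss(theta):
    # Stand-in for the downstream application loss on the rendered data.
    return (render(theta) - np.sin(target)) ** 2

def grad(theta):
    # Analytic gradient of the full render -> loss loop (chain rule).
    return 2 * (np.sin(theta) - np.sin(target)) * np.cos(theta)

theta = 0.1
for _ in range(200):
    theta -= 0.5 * grad(theta)     # optimize the data-generation parameter

print(round(task_loss(theta), 6))  # loss driven toward 0
```

The point of the sketch is the direction of information flow: the task loss, not a human, decides which scene parameters to render next.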
Weakly supervised object detection (WSOD) enables training object detectors using only image-level class labels. However, the practical application of current WSOD models is limited, because they operate at small scales and require extensive training and refinement. We propose the Weakly Supervised Detection Transformer, which enables efficient transfer from large-scale pretraining datasets to WSOD finetuning on hundreds of novel objects. We leverage pretrained knowledge to improve the multiple instance learning framework used in WSOD, and experiments show that our approach outperforms previous state-of-the-art methods on datasets with twice as many novel classes as prior work.
Visual attention helps human vision achieve robust perception under noise, corruption, and distribution shifts, an area where modern neural networks still fall short. We present VARS, Visual Attention from Recurrent Sparse reconstruction, a new attention formulation built on two prominent features of the human visual attention mechanism: recurrency and sparsity. Related features are grouped together via recurrent connections between neurons, while salient objects emerge via sparse regularization. VARS adopts an attractor network with recurrent connections that converges toward a stable pattern over time. Network layers are represented as ordinary differential equations (ODEs), formulating attention as a recurrent attractor network that equivalently optimizes a sparse reconstruction of the input using a dictionary of "templates" encoding the underlying data patterns. We show that self-attention is a special case of VARS with a single-step optimization and no sparsity constraint. VARS can readily be used as a replacement for self-attention in popular vision transformers, consistently improving their robustness across various benchmarks. Code is released on GitHub (https://github.com/bfshi/vars).
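The "sparse reconstruction with a template dictionary" idea can be made concrete with a classic sparse-coding solver. The sketch below uses ISTA (iterative soft-thresholding) to find sparse codes over a random dictionary and returns the reconstruction as the "attended" output; the recurrent iterations play the role of the attractor dynamics. This is our own minimal illustration of the underlying principle, not the VARS layer itself, and the dictionary, penalty weight, and step count are arbitrary choices.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_reconstruction_attention(x, D, lam=0.05, steps=200):
    """Solve min_a ||x - D a||^2 + lam * ||a||_1 via ISTA, then return the
    reconstruction D a.  D is the 'template' dictionary; the iterations
    stand in for the recurrent attractor dynamics."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros((D.shape[1], x.shape[1]))
    for _ in range(steps):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return D @ a, a

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)             # unit-norm template columns
x = D[:, :3] @ np.array([[1.0], [0.5], [2.0]])   # signal built from 3 templates
recon, codes = sparse_reconstruction_attention(x, D)
print(np.count_nonzero(np.abs(codes) > 1e-3))    # only a few templates active
```

The sparsity penalty is what makes a small set of templates "win", which is the mechanism the abstract credits for salient objects emerging.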
Contrastive learning relies on the assumption that positive pairs contain related views, e.g., frames of a video or co-occurring multimodal signals of a video, which share certain underlying information about an instance. But what if this assumption is violated? The literature suggests that contrastive learning produces suboptimal representations in the presence of noisy views, e.g., false positive pairs with no apparent shared information. In this work, we propose a new contrastive loss function that is robust against noisy views. We provide rigorous theoretical justification by showing connections to robust symmetric losses for noisy binary classification, and we establish a new contrastive bound based on the Wasserstein distance measure. The proposed loss is completely modality-agnostic and a simple drop-in replacement for the InfoNCE loss, making it easy to apply to existing contrastive frameworks. We show that our approach provides consistent improvements over the state of the art on image, video, and graph contrastive learning benchmarks that exhibit a variety of real-world noise patterns.
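The failure mode, and one standard way to repair it, can be shown numerically. Below, standard InfoNCE is compared with an illustrative robustified variant in which a Box-Cox style power transform replaces the logarithm, bounding the penalty a single (possibly false) positive can incur. The `q` and `lam` knobs and the exact formula are our assumptions for illustration, not necessarily the paper's parameterization; the drop-in-replacement interface (same similarity inputs, scalar loss out) is the point being demonstrated.

```python
import numpy as np

def infonce(pos, negs, t=0.5):
    """Standard InfoNCE on raw similarity scores: pos is the positive
    pair's similarity, negs are the negatives'."""
    logits = np.concatenate(([pos], negs)) / t
    return -pos / t + np.log(np.exp(logits).sum())

def robust_contrastive(pos, negs, t=0.5, q=0.5, lam=0.01):
    """Illustrative robust variant: a power (Box-Cox style) transform
    replaces the log, so a badly misaligned positive yields a bounded,
    not unbounded, penalty.  q and lam are hypothetical knobs."""
    s_pos = np.exp(pos / t)
    s_all = s_pos + np.exp(np.asarray(negs) / t).sum()
    return -(s_pos ** q) / q + ((lam * s_all) ** q) / q

negs = np.array([0.1, 0.2])
# A false positive with similarity -10: InfoNCE's penalty blows up
# (it grows linearly in -pos), while the robust loss saturates.
print(infonce(-10.0, negs) > robust_contrastive(-10.0, negs))  # True
```

Because both functions take the same inputs and return a scalar, swapping one for the other in an existing training loop is a one-line change, which is what "drop-in replacement" amounts to in practice.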
Lung ultrasound (LUS) is possibly the only medical imaging modality available for continuous and periodic monitoring of the lungs. This is extremely useful for tracking lung manifestations during the onset of a lung infection, or for tracking the effect of vaccination on the lungs, as in COVID-19. There have been many attempts to classify lung severity into various classes or to automatically segment various LUS landmarks and manifestations. However, all of these approaches are based on training static machine learning models, which require large, clinically annotated datasets, are computationally heavy, and are mostly non-real-time. In this work, a real-time, lightweight, active-learning-based approach is proposed for faster classification of COVID-19 subjects in resource-constrained settings. The tool, based on the You Only Look Once (YOLO) network, is capable of identifying various LUS landmarks, artifacts, and manifestations; predicting the severity of lung infection; assessing image quality; and, via active learning, incorporating clinician feedback and summarizing the significant frames with high infection severity or high image quality for further analysis. The results show that the proposed tool achieves a mean average precision (mAP) of 66% at the chosen intersection-over-union (IoU) threshold for the prediction of LUS landmarks. The 14 MB lightweight YOLOv5s network achieves 123 FPS while running on a Quadro P4000 GPU. The tool is available for use and analysis upon request from the authors.
The development of social media user stance detection and bot detection methods relies heavily on large-scale, high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original data collection in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
The interview has long been regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews among themselves. However, such peer mock interviews are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are unlikely to behave like real interviewers. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct interviews online, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but remain low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector from the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, neither of which is low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
Dynamic treatment regimes assign personalized treatments to patients sequentially over time based on their baseline information and time-varying covariates. In mobile health applications, these covariates are typically collected at different frequencies over a long time horizon. In this paper, we propose a deep spectral Q-learning algorithm, which integrates principal component analysis (PCA) with deep Q-learning to handle the mixed frequency data. In theory, we prove that the mean return under the estimated optimal policy converges to that under the optimal one and establish its rate of convergence. The usefulness of our proposal is further illustrated via simulations and an application to a diabetes dataset.
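The PCA-plus-Q-learning combination for mixed-frequency covariates can be sketched end to end on synthetic data: compress the long high-frequency stream to its top principal components, concatenate them with the low-frequency baseline covariates, and fit a Q-function on the combined state. To keep the sketch self-contained we use a linear least-squares fit against one-step rewards, i.e., a contextual-bandit approximation, where the paper uses a deep Q-network and a full sequential setting; all dimensions and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixed-frequency data: per decision point, a low-frequency baseline
# vector plus a long high-frequency stream (e.g. minute-level sensors).
n, hi_len, lo_dim, k = 200, 60, 3, 4
hi = rng.standard_normal((n, hi_len))
lo = rng.standard_normal((n, lo_dim))

# Step 1 (PCA): project the high-frequency stream onto its top-k
# principal components to obtain a compact spectral summary.
hi_c = hi - hi.mean(axis=0)
_, _, Vt = np.linalg.svd(hi_c, full_matrices=False)
hi_pc = hi_c @ Vt[:k].T                      # (n, k) compressed features

# Step 2 (Q-fitting, linear stand-in for the deep Q-network): regress
# Q(s, a) = phi(s, a) @ w against observed one-step rewards.
state = np.hstack([lo, hi_pc])               # (n, lo_dim + k) mixed state
action = rng.integers(0, 2, size=n)          # binary treatment indicator
reward = rng.standard_normal(n)              # placeholder outcomes

phi = np.hstack([state, state * action[:, None]])   # action-interaction features
w, *_ = np.linalg.lstsq(phi, reward, rcond=None)
q_hat = phi @ w
print(state.shape, q_hat.shape)
```

The estimated policy would then pick, at each state, the action whose fitted Q-value is larger; the PCA step is what keeps the state dimension manageable when the high-frequency stream is long.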
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video given a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as the new boundaries. 2) Reasoning bias: such incorrect new boundary frames also bias the frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism that generates additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationships among these frames and generate soft labels on the boundaries for more accurate frame-query reasoning. This mechanism can also supplement the sampled sparse frames with the missing consecutive visual semantics for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.